
    The perirhinal cortex and conceptual processing: Effects of feature-based statistics following damage to the anterior temporal lobes.

    The anterior temporal lobe (ATL) plays a prominent role in models of semantic knowledge, although it remains unclear how the specific subregions within the ATL contribute to semantic memory. Patients with neurodegenerative diseases, like semantic dementia, have widespread damage to the ATL, making inferences about the relationship between anatomy and cognition problematic. Here we take a detailed anatomical approach to ask which substructures within the ATL contribute to conceptual processing, with the prediction that the perirhinal cortex (PRc) will play a critical role for concepts that are more semantically confusable. We tested two patient groups, those with and without damage to the PRc, across two behavioural experiments: picture naming and word-picture matching. For both tasks, we manipulated the degree of semantic confusability of the concepts. By contrasting the performance of the two groups, along with healthy controls, we show that damage to the PRc results in worse performance for concepts with higher semantic confusability across both experiments. Further, by correlating the degree of damage across anatomically defined regions of interest with performance, we find that PRc damage is related to performance for concepts with increased semantic confusability. Our results show that the PRc supports a necessary neurocognitive function that enables fine-grained conceptual processing through the resolution of semantic confusability.

    This work was supported by funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013)/ERC Grant agreement no. 249640 to LKT. This is the final published version. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.neuropsychologia.2015.01.04
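    The lesion-behaviour logic above, correlating graded damage in anatomically defined regions of interest with performance on semantically confusable items, can be sketched as follows. This is a minimal illustration with simulated numbers: the ROI names follow the abstract, but the data, the accuracy measure, and the use of a simple Pearson correlation are assumptions, not the paper's actual pipeline.

```python
# Sketch: correlate per-patient ROI damage with accuracy on
# high-confusability items (all values simulated for illustration).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_patients = 12

# Proportion of damaged tissue in each anatomically defined ROI (simulated).
roi_damage = {
    "PRc": rng.random(n_patients),
    "temporal_pole": rng.random(n_patients),
}

# Accuracy on high-confusability items, one value per patient (simulated).
acc_high_confusability = rng.random(n_patients)

for roi, damage in roi_damage.items():
    r, p = pearsonr(damage, acc_high_confusability)
    print(f"{roi}: r = {r:+.2f}, p = {p:.3f}")
```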

    Predicting the Time Course of Individual Objects with MEG.

    To respond appropriately to objects, we must process visual inputs rapidly and assign them meaning. This involves highly dynamic, interactive neural processes through which information accumulates and cognitive operations are resolved across multiple time scales. However, there is currently no model of object recognition that provides an integrated account of how visual and semantic information emerge over time; therefore, it remains unknown how and when semantic representations are evoked from visual inputs. Here, we test whether a model of individual objects, based on combining the HMax computational model of vision with semantic-feature information, can account for and predict time-varying neural activity recorded with magnetoencephalography. We show that combining HMax and semantic properties provides a better account of neural object representations than HMax alone, both in terms of model fit and classification performance. Our results show that modeling and classifying individual objects is significantly improved by adding semantic-feature information beyond ∼200 ms. These results provide important insights into the functional properties of visual processing across time.

    This is the final version. It was first published by OUP in Cerebral Cortex at http://cercor.oxfordjournals.org/content/early/2014/09/09/cercor.bhu203.long
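    The core model-comparison idea, testing whether adding semantic-feature information to a visual model improves the account of neural activity, can be sketched as below. Everything here is simulated: the feature matrices are stand-ins for HMax outputs and semantic-feature norms, and cross-validated ridge regression is one reasonable way to compare the two models, not necessarily the paper's method.

```python
# Sketch: does a visual + semantic feature space predict neural responses
# better than a visual-only space? All data simulated for illustration.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_objects, n_visual, n_semantic = 80, 50, 30

visual = rng.standard_normal((n_objects, n_visual))      # stand-in for HMax outputs
semantic = rng.standard_normal((n_objects, n_semantic))  # stand-in for feature norms
combined = np.hstack([visual, semantic])

# Neural response for one MEG time window (simulated); repeating the
# comparison across windows would trace when semantic information helps,
# e.g. the improvement reported beyond ~200 ms.
y = rng.standard_normal(n_objects)

for name, X in [("visual only", visual), ("visual + semantic", combined)]:
    score = cross_val_score(RidgeCV(), X, y, cv=5).mean()
    print(f"{name}: mean cross-validated R^2 = {score:.3f}")
```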

    The Centre for Speech, Language and the Brain (CSLB) concept property norms.

    Theories of the representation and processing of concepts have been greatly enhanced by models based on information available in semantic property norms. This information relates both to the identity of the features produced in the norms and to their statistical properties. In this article, we introduce a new and large set of property norms that are designed to be a more flexible tool to meet the demands of many different disciplines interested in conceptual knowledge representation, from cognitive psychology to computational linguistics. As well as providing all features listed by 2 or more participants, we also show the considerable linguistic variation that underlies each normalized feature label and the number of participants who generated each variant. Our norms are highly comparable with the largest extant set (McRae, Cree, Seidenberg, & McNorgan, 2005) in terms of the number and distribution of features. In addition, we show how the norms give rise to a coherent category structure. We provide these norms in the hope that the greater detail available in the Centre for Speech, Language and the Brain norms will further promote the development of models of conceptual knowledge. The norms can be downloaded at www.csl.psychol.cam.ac.uk/propertynorms
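    The two kinds of statistical information such norms afford, how widely a feature is shared across concepts and how strongly features co-occur, are straightforward to compute from a concept-by-feature matrix. The sketch below uses a toy binary matrix; the feature names and the exact formulas (inverse concept count for distinctiveness, mean absolute pairwise correlation for correlational strength) are illustrative assumptions, not the published norms format.

```python
# Sketch: feature statistics from a toy concept-by-feature property matrix.
import numpy as np

features = ["has_eyes", "has_legs", "has_hump", "is_animal", "made_of_metal"]
norms = np.array([
    [1, 1, 0, 1, 0],  # e.g. "dog"
    [1, 1, 1, 1, 0],  # e.g. "camel"
    [0, 0, 0, 0, 1],  # e.g. "hammer"
])

# Distinctiveness: 1 / number of concepts a feature occurs in. Shared
# features (low distinctiveness) carry broad category information.
concepts_per_feature = norms.sum(axis=0)
distinctiveness = 1.0 / concepts_per_feature

# Correlational strength, here approximated by each feature's mean absolute
# correlation with every other feature across concepts.
corr = np.corrcoef(norms.T)
np.fill_diagonal(corr, 0.0)
corr_strength = np.abs(corr).mean(axis=1)

for f, d, c in zip(features, distinctiveness, corr_strength):
    print(f"{f:14s} distinctiveness={d:.2f} corr_strength={c:.2f}")
```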

    From Perception to Conception: How Meaningful Objects Are Processed over Time

    To recognize visual objects, our sensory perceptions are transformed through dynamic neural interactions into meaningful representations of the world, but exactly how visual inputs invoke object meaning remains unclear. To address this issue, we apply a regression approach to magnetoencephalography data, modeling perceptual and conceptual variables. Key conceptual measures were derived from semantic feature-based models claiming that shared features (e.g., has eyes) provide broad category information, while distinctive features (e.g., has a hump) are additionally required for more specific object identification. Our results show initial perceptual effects in visual cortex that are rapidly followed by semantic feature effects throughout ventral temporal cortex within the first 120 ms. Moreover, these early semantic effects reflect shared semantic feature information supporting coarse category-type distinctions. Post-200 ms, we observed effects along the extent of ventral temporal cortex for both shared and distinctive features, which together allow for conceptual differentiation and object identification. By relating spatiotemporal neural activity to statistical feature-based measures of semantic knowledge, we demonstrate that qualitatively different kinds of perceptual and semantic information are extracted from visual objects over time, with rapid activation of shared object features followed by concomitant activation of distinctive features that together enable meaningful visual object recognition.
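    The regression approach described here, fitting perceptual and semantic predictors to neural data at every time point and tracking the resulting coefficient time courses, can be sketched as follows. The data are simulated and the predictor names are stand-ins for the paper's measures; ordinary least squares is used purely for illustration.

```python
# Sketch: time-resolved regression of neural data on perceptual and
# semantic-feature predictors (all values simulated for illustration).
import numpy as np

rng = np.random.default_rng(2)
n_objects, n_times = 100, 120

X = np.column_stack([
    np.ones(n_objects),               # intercept
    rng.standard_normal(n_objects),   # perceptual measure (e.g. image statistics)
    rng.standard_normal(n_objects),   # shared-feature measure
    rng.standard_normal(n_objects),   # distinctive-feature measure
])

# Simulated response at one source: objects x time points.
y = rng.standard_normal((n_objects, n_times))

# One OLS fit per time point; betas holds one coefficient time course per
# predictor, which is what lets early shared-feature effects be separated
# from later distinctive-feature effects.
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas.shape)  # (4, n_times)
```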

    Feature Statistics Modulate the Activation of Meaning During Spoken Word Processing.

    Understanding spoken words involves a rapid mapping from speech to conceptual representations. One distributed feature-based conceptual account assumes that the statistical characteristics of concepts' features, namely the number of concepts they occur in (distinctiveness/sharedness) and their likelihood of co-occurrence (correlational strength), determine conceptual activation. To test these claims, we investigated the role of distinctiveness/sharedness and correlational strength in speech-to-meaning mapping, using a lexical decision task and computational simulations. Responses were faster for concepts with higher sharedness, suggesting that shared features are facilitatory in tasks like lexical decision that require access to them. Correlational strength facilitated responses for slower participants, suggesting a time-sensitive co-occurrence-driven settling mechanism. The computational simulation showed similar effects, with early effects of shared features and later effects of correlational strength. These results support a general-to-specific account of conceptual processing, whereby early activation of shared features is followed by the gradual emergence of a specific target representation.

    This work was supported by a European Research Council Advanced Investigator grant under the European Community's Seventh Framework Programme (FP7/2007-2013), ERC Grant agreement no. 249640, to LKT, and by a Marie Curie Intra-European Fellowship and a Swiss National Science Foundation Ambizione Fellowship to KIT. We thank Ken McRae and colleagues for making their property norm data available. We are very grateful to George Cree and Chris McNorgan for providing us with the MikeNet implementation of their model. This is the final published version. It first appeared at http://dx.doi.org/10.1111/cogs.1223
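    The behavioural claim, a main effect of sharedness on response times plus a correlational-strength effect that grows with participant slowness, maps naturally onto a regression with an interaction term. The sketch below simulates such data; the variable names and the simple OLS formulation are assumptions (the paper's analyses, e.g. any mixed-effects modelling, may differ).

```python
# Sketch: RT ~ sharedness + correlational strength x participant speed,
# fitted to simulated lexical decision data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400

df = pd.DataFrame({
    "sharedness": rng.standard_normal(n),     # feature sharedness of the word's concept
    "corr_strength": rng.standard_normal(n),  # feature co-occurrence likelihood
    "speed": rng.standard_normal(n),          # participant slowness (centred)
})

# Simulate RTs: faster with sharedness; corr_strength helps slower people.
df["rt"] = (600
            - 15 * df["sharedness"]
            - 8 * df["corr_strength"] * df["speed"]
            + rng.normal(0, 30, n))

model = smf.ols("rt ~ sharedness + corr_strength * speed", data=df).fit()
print(model.params.round(2))
```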

    Reorganization of syntactic processing following left-hemisphere brain damage: does right-hemisphere activity preserve function?

    The extent to which the human brain shows evidence of functional plasticity across the lifespan has been addressed in the context of pathological brain changes and, more recently, of the changes that take place during healthy ageing. Here we examine the potential for plasticity by asking whether a strongly left-lateralized system can successfully reorganize to the right hemisphere following left-hemisphere brain damage. To do this, we focus on syntax, a key linguistic function considered to be strongly left-lateralized, combining measures of tissue integrity, neural activation and behavioural performance. In a functional neuroimaging study, participants heard spoken sentences that differentially loaded on syntactic and semantic information. While healthy controls activated a left-hemisphere network of correlated activity including Brodmann areas 45/47 and posterior middle temporal gyrus during syntactic processing, patients activated Brodmann areas 45/47 bilaterally and right middle temporal gyrus. However, voxel-based morphometry analyses showed that only tissue integrity in left Brodmann areas 45/47 was correlated with activity and performance; poor tissue integrity in left Brodmann area 45 was associated with reduced functional activity and increased syntactic deficits. Activity in the right hemisphere was not correlated with damage in the left hemisphere or with performance. Reduced neural integrity in the left hemisphere through brain damage or healthy ageing results in increased right-hemisphere activation in regions homologous to those left-hemisphere regions typically involved in the young. However, these regions do not support the same linguistic functions as those in the left hemisphere and only indirectly contribute to preserved syntactic capacity. This establishes the unique role of the left hemisphere in syntax, a core component of human language.

    Balancing Prediction and Sensory Input in Speech Comprehension: The Spatiotemporal Dynamics of Word Recognition in Context.

    Spoken word recognition in context is remarkably fast and accurate, with recognition times of ∼200 ms, typically well before the end of the word. The neurocomputational mechanisms underlying these contextual effects are still poorly understood. This study combines source-localized electroencephalographic and magnetoencephalographic (EMEG) measures of real-time brain activity with multivariate representational similarity analysis to determine directly the timing and computational content of the processes evoked as spoken words are heard in context, and to evaluate the respective roles of bottom-up and predictive processing mechanisms in the integration of sensory and contextual constraints. Male and female human participants heard simple (modifier-noun) English phrases that varied in the degree of semantic constraint that the modifier (W1) exerted on the noun (W2), as in "yellow banana". We used gating tasks to generate estimates of the probabilistic predictions generated by these constraints, as well as measures of their interaction with the bottom-up perceptual input for W2. Representational similarity analysis models of these measures were tested against electroencephalographic and magnetoencephalographic brain data across a bilateral fronto-temporo-parietal language network. Consistent with probabilistic predictive processing accounts, we found early activation of semantic constraints in frontal cortex (LBA45) as W1 was heard. The effects of these constraints (at 100 ms after W2 onset in left middle temporal gyrus and at 140 ms in left Heschl's gyrus) were only detectable, however, after the initial phonemes of W2 had been heard. Within an overall predictive processing framework, bottom-up sensory inputs are still required to achieve early and robust spoken word recognition in context.

    SIGNIFICANCE STATEMENT Human listeners recognize spoken words in natural speech contexts with remarkable speed and accuracy, often identifying a word well before all of it has been heard. In this study, we investigate the brain systems that support this important capacity, using neuroimaging techniques that can track real-time brain activity during speech comprehension. This makes it possible to locate the brain areas that generate predictions about upcoming words and to show how these expectations are integrated with the evidence provided by the speech being heard. We use the timing and localization of these effects to provide the most specific account to date of how the brain achieves an optimal balance between prediction and sensory input in the interpretation of spoken language.
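    The representational similarity analysis at the heart of this study can be sketched in a few lines: build a model dissimilarity matrix from some per-item measure, build a data dissimilarity matrix from the neural patterns at each time point, and correlate the two. Everything below is simulated; the model measure is a generic placeholder, whereas the paper derives its models from gating-task estimates of contextual constraint.

```python
# Sketch: time-resolved RSA on simulated item x channel x time data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_items, n_channels, n_times = 20, 64, 100

data = rng.standard_normal((n_items, n_channels, n_times))

# Hypothetical model RDM: pairwise distances on a per-item measure (here a
# random placeholder standing in for, e.g., contextual-constraint strength).
model_measure = rng.standard_normal((n_items, 1))
model_rdm = pdist(model_measure, metric="euclidean")

# Correlate the data RDM with the model RDM at every time point.
fit = np.empty(n_times)
for t in range(n_times):
    data_rdm = pdist(data[:, :, t], metric="correlation")
    fit[t] = spearmanr(data_rdm, model_rdm)[0]

print("peak model fit at sample", int(fit.argmax()))
```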

    Preserving syntactic processing across the adult life span: the modulation of the frontotemporal language system in the context of age-related atrophy.

    Although widespread neural atrophy is an inevitable consequence of normal aging, not all cognitive abilities decline as we age. For example, spoken language comprehension tends to be preserved, despite atrophy in neural regions involved in language function. Here, we combined measures of behavior, functional activation, and gray matter (GM) change in a younger (19-34 years) and an older (49-86 years) group of participants to identify the mechanisms leading to preserved language comprehension across the adult life span. We focussed primarily on syntactic functions because these are strongly left lateralized, providing the potential for contralateral recruitment. In a functional magnetic resonance imaging study, we used a word-monitoring task to minimize working memory demands, manipulating the availability of semantics and syntax to ask whether syntax is preserved in aging because of the functional recruitment of other brain regions that successfully compensate for neural atrophy. Performance in the older group was preserved despite GM loss. This preservation was related to increased activity in right-hemisphere frontotemporal regions, which was associated with age-related atrophy in the left-hemisphere frontotemporal network activated in the young. We argue that preserved syntactic processing across the life span is due to the shift from a primarily left-hemisphere frontotemporal system to a bilateral functional language network.